Research Article | Open Access
Volume 2025 | Article ID 100028 | https://doi.org/10.1016/j.plaphe.2025.100028

Soybean yield estimation and lodging discrimination based on lightweight UAV and point cloud deep learning

Longyu Zhou,1,7 Dezhi Han,2,7 Guangyao Sun,3 Yaling Liu,4 Xiaofei Yan,2,7 Hongchang Jia,2,7 Long Yan,5 Puyu Feng,1 Yinghui Li,6 Lijuan Qiu,6 and Yuntao Ma1

1College of Land Science and Technology, China Agricultural University, Beijing, 100193, China
2Heihe Branch of Heilongjiang Academy of Agricultural Sciences, Heihe, China
3College of Information and Electrical Engineering, China Agricultural University, Beijing 100193, China
4Inner Mongolia Pratacultural Technology Innovation Center Co., Ltd., Inner Mongolia, China
5Institute of Cereal and Oil Crops, Hebei Academy of Agricultural and Forestry Sciences, Shijiazhuang, Hebei, 050035, China
6State Key Laboratory of Crop Gene Resources and Breeding, Institute of Crop Science, Chinese Academy of Agricultural Sciences, Beijing, 100081, China
7The first two authors contributed equally to this work.

Received: 25 Jul 2024
Accepted: 02 Feb 2025
Published: 20 Mar 2025

Abstract

The unmanned aerial vehicle (UAV) platform has emerged as a powerful tool in soybean (Glycine max (L.) Merr.) breeding phenotype research due to its high throughput and adaptability. However, previous studies have predominantly relied on statistical features such as vegetation indices and textures, overlooking the crucial structural information embedded in the data. Feature fusion has often been confined to a one-dimensional index form, which can decouple spatial and spectral information and neglect their interactions at the data level. In this study, we leverage our team's cross-circling oblique (CCO) route photography and Structure-from-Motion with Multi-View Stereo (SfM-MVS) techniques to reconstruct the three-dimensional (3D) structure of soybean canopies. New point cloud deep learning models, SoyNet and SoyNet-Res, were further created with two novel data-level fusion strategies that integrate spatial structure and color information. Our results reveal that incorporating RGB color and vegetation index (VI) spectral information with spatial structure information leads to a significant reduction in root mean square error (RMSE) for yield estimation (22.55 kg ha−1) and an improvement in F1-score for five-class lodging discrimination (0.06) at the S7 growth stage. The SoyNet-Res model employing multi-task learning exhibits better accuracy in yield estimation (RMSE: 349.45 kg ha−1) than H2O-AutoML. Furthermore, our findings indicate that multi-task deep learning outperforms single-task learning in lodging discrimination, achieving a top-2 accuracy of 0.87 and a top-3 accuracy of 0.97 for the five-class task. In conclusion, the point cloud deep learning method exhibits tremendous potential for learning multi-phenotype tasks, laying the foundation for optimizing soybean breeding programs.
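The data-level fusion idea summarized above — attaching color and a derived vegetation index channel to each 3D point before feeding the cloud to a network — can be sketched as follows. This is a minimal illustration only: the function name, array layout, and the choice of excess green (ExG) as the VI are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def fuse_point_features(xyz: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """Concatenate per-point XYZ, normalized RGB, and an ExG channel.

    xyz: (N, 3) float spatial coordinates from the SfM-MVS reconstruction.
    rgb: (N, 3) colors in [0, 255].
    Returns an (N, 7) fused feature array suitable as point-wise input
    to a PointNet-style model.
    """
    rgb_norm = rgb.astype(np.float64) / 255.0
    r, g, b = rgb_norm[:, 0], rgb_norm[:, 1], rgb_norm[:, 2]
    exg = 2.0 * g - r - b  # excess-green vegetation index per point
    return np.concatenate([xyz, rgb_norm, exg[:, None]], axis=1)

# Example: four synthetic canopy points
points = fuse_point_features(
    np.random.rand(4, 3),
    np.random.randint(0, 256, size=(4, 3)),
)
print(points.shape)  # (4, 7)
```

Fusing at the data level, rather than appending index statistics after feature extraction, lets the network learn spatial–spectral interactions jointly, which is the motivation the abstract gives for the two fusion strategies.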

© 2019–2023 Plant Phenomics. All Rights Reserved. ISSN 2643-6515.
